This dataset is available in R through the package “isdals”. It comes from the book *Applied Linear Regression* by S. Weisberg (1985). It consists of 19 rows (rats) and four columns: the rat’s body weight, its liver weight, the dose of drug administered, and the amount of drug found in the rat’s liver.
| BodyWt | LiverWt | Dose | DoseInLiver |
|---|---|---|---|
| 176 | 6.5 | 0.88 | 0.42 |
| 176 | 9.5 | 0.88 | 0.25 |
| 190 | 9.0 | 1.00 | 0.56 |
| 176 | 8.9 | 0.88 | 0.23 |
| 200 | 7.2 | 1.00 | 0.23 |
| 167 | 8.9 | 0.83 | 0.32 |
| 188 | 8.0 | 0.94 | 0.37 |
| 195 | 10.0 | 0.98 | 0.41 |
| 176 | 8.0 | 0.88 | 0.33 |
| 165 | 7.9 | 0.84 | 0.38 |
| 158 | 6.9 | 0.80 | 0.27 |
| 148 | 7.3 | 0.74 | 0.36 |
| 149 | 5.2 | 0.75 | 0.21 |
| 163 | 8.4 | 0.81 | 0.28 |
| 170 | 7.2 | 0.85 | 0.34 |
| 186 | 6.8 | 0.94 | 0.28 |
| 146 | 7.3 | 0.73 | 0.30 |
| 181 | 9.0 | 0.90 | 0.37 |
| 149 | 6.4 | 0.75 | 0.46 |
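Since the analysis below refers to the data as `rats`, the table can also be transcribed directly into a data frame (a sketch equivalent to loading it from “isdals”; the values are copied from the table above):

```r
# The dataset from the table above, entered by hand
rats <- data.frame(
  BodyWt = c(176, 176, 190, 176, 200, 167, 188, 195, 176, 165,
             158, 148, 149, 163, 170, 186, 146, 181, 149),
  LiverWt = c(6.5, 9.5, 9.0, 8.9, 7.2, 8.9, 8.0, 10.0, 8.0, 7.9,
              6.9, 7.3, 5.2, 8.4, 7.2, 6.8, 7.3, 9.0, 6.4),
  Dose = c(0.88, 0.88, 1.00, 0.88, 1.00, 0.83, 0.94, 0.98, 0.88, 0.84,
           0.80, 0.74, 0.75, 0.81, 0.85, 0.94, 0.73, 0.90, 0.75),
  DoseInLiver = c(0.42, 0.25, 0.56, 0.23, 0.23, 0.32, 0.37, 0.41, 0.33, 0.38,
                  0.27, 0.36, 0.21, 0.28, 0.34, 0.28, 0.30, 0.37, 0.46)
)
```

The sample mean of `DoseInLiver` is about 0.335, which is consistent with the posterior mean of `mu` found below.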
model {
  for (i in 1:N) {
    dil[i] ~ dnorm(mu, tau)
  }
  mu ~ dnorm(0.1, 0.001)
  tau ~ dgamma(0.001, 0.001)
  sigma <- 1/sqrt(tau)
  dilbar <- mean(dil[])
}
djags <- list("N" = 19, "dil" = rats[, "DoseInLiver"])
params <- c("mu", "sigma")
rmfile <- "/Users/caterina/Documents/DataScience/SDS II/Project/rmodel1.txt"
set.seed(222)
provaR <- jags(data = djags,
               model.file = rmfile,
               parameters.to.save = params,
               n.chains = 3, n.iter = 10000, n.burnin = 1000, n.thin = 10)
## Inference for Bugs model at "/Users/caterina/Documents/DataScience/SDS II/Project/rmodel1.txt", fit using jags,
## 3 chains, each with 10000 iterations (first 1000 discarded), n.thin = 10
## n.sims = 2700 iterations saved
## mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
## mu 0.336 0.025 0.292 0.322 0.336 0.350 0.379 1.004 2700
## sigma 0.093 0.021 0.067 0.081 0.090 0.102 0.129 1.003 1000
## deviance -37.127 2.640 -39.211 -38.657 -37.833 -36.387 -31.245 1.041 2700
##
## For each parameter, n.eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
##
## DIC info (using the rule, pD = var(deviance)/2)
## pD = 3.5 and DIC = -33.6
## DIC is an estimate of expected predictive error (lower deviance is better).
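The DIC rule quoted in the output can be sketched in base R. The draws below are illustrative stand-ins matching the reported posterior deviance (mean ≈ -37.1, sd ≈ 2.6), not the actual saved chains:

```r
# Sketch of the DIC rule: pD = var(deviance)/2, DIC = mean(deviance) + pD
set.seed(1)
dev <- rnorm(2700, mean = -37.1, sd = 2.6)  # stand-in for saved deviance samples
pD  <- var(dev) / 2
DIC <- mean(dev) + pD
round(c(pD = pD, DIC = DIC), 1)
```

With these stand-in draws the result lands close to the reported pD = 3.5 and DIC = -33.6.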
model {
  for (i in 1:N) {
    mu[i] <- alpha + beta.dose*dose[i] + beta.lw*lw[i] + beta.w*w[i]
    dil[i] ~ dnorm(mu[i], tau)
  }
  alpha ~ dnorm(0.0, 1.0E-4)
  beta.dose ~ dnorm(0.0, 1.0E-4)
  beta.lw ~ dnorm(0.0, 1.0E-4)
  beta.w ~ dnorm(0.0, 1.0E-4)
  tau ~ dgamma(0.00001, 0.00001)
  sigma <- 1/sqrt(tau)
  dilbar <- mean(dil[])
}
djags <- list("N" = 19, "dil" = rats[, "DoseInLiver"], dose = rats[, "Dose"],
              w = rats[, "BodyWt"], lw = rats[, "LiverWt"])
params <- c("mu", "sigma")
rmfile2NEW <- "/Users/caterina/Documents/DataScience/SDS II/Project/Modello2.txt"
set.seed(111)
provaR2N <- jags(data = djags,
                 model.file = rmfile2NEW,
                 parameters.to.save = params,
                 n.chains = 3, n.iter = 10000, n.burnin = 1000, n.thin = 5)
## Inference for Bugs model at "/Users/caterina/Documents/DataScience/SDS II/Project/Modello2.txt", fit using jags,
## 3 chains, each with 10000 iterations (first 1000 discarded), n.thin = 5
## n.sims = 5400 iterations saved
## mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
## mu[1] 0.296 0.036 0.226 0.273 0.296 0.318 0.367 1.001 5400
## mu[2] 0.340 0.036 0.269 0.317 0.339 0.362 0.412 1.001 2900
## mu[3] 0.537 0.077 0.389 0.488 0.535 0.584 0.687 1.001 5400
## mu[4] 0.331 0.028 0.276 0.313 0.330 0.349 0.386 1.001 3200
## mu[5] 0.297 0.052 0.195 0.264 0.298 0.329 0.402 1.001 5400
## mu[6] 0.313 0.035 0.246 0.291 0.313 0.335 0.382 1.001 5400
## mu[7] 0.313 0.031 0.251 0.294 0.314 0.333 0.374 1.001 5400
## mu[8] 0.361 0.042 0.280 0.334 0.361 0.388 0.445 1.002 2500
## mu[9] 0.318 0.022 0.273 0.304 0.318 0.332 0.360 1.001 5400
## mu[10] 0.383 0.029 0.325 0.365 0.383 0.401 0.439 1.001 5400
## mu[11] 0.350 0.029 0.293 0.332 0.350 0.368 0.407 1.001 5400
## mu[12] 0.318 0.035 0.247 0.296 0.318 0.340 0.385 1.001 5400
## mu[13] 0.308 0.047 0.214 0.278 0.308 0.338 0.401 1.001 5400
## mu[14] 0.307 0.031 0.246 0.287 0.307 0.327 0.369 1.001 5400
## mu[15] 0.308 0.023 0.263 0.294 0.308 0.323 0.355 1.001 5400
## mu[16] 0.338 0.039 0.260 0.314 0.339 0.363 0.417 1.001 5400
## mu[17] 0.318 0.037 0.244 0.296 0.318 0.342 0.390 1.001 5400
## mu[18] 0.310 0.033 0.246 0.288 0.309 0.331 0.373 1.002 4200
## mu[19] 0.325 0.035 0.255 0.303 0.325 0.348 0.395 1.001 5400
## sigma 0.082 0.018 0.056 0.070 0.079 0.090 0.122 1.001 3300
## deviance -42.028 3.872 -46.910 -44.780 -42.770 -40.152 -32.741 1.001 5400
##
## For each parameter, n.eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
##
## DIC info (using the rule, pD = var(deviance)/2)
## pD = 7.5 and DIC = -34.5
## DIC is an estimate of expected predictive error (lower deviance is better).
model {
  for (i in 1:N) {
    mu[i] <- beta.dose*dose[i] + beta.w*w[i]
    dil[i] ~ dnorm(mu[i], tau)
  }
  beta.dose ~ dnorm(0.0, 1.0E-4)
  beta.w ~ dnorm(0.0, 1.0E-4)
  tau ~ dgamma(0.00001, 0.00001)
  sigma <- 1/sqrt(tau)
  dilbar <- mean(dil[])
}
suppressPackageStartupMessages(library(tidyverse))
djags <- list("N" = 19, "dil" = rats[, "DoseInLiver"], "dose" = rats[, "Dose"],
              "w" = rats[, "BodyWt"])
params <- c("mu", "sigma")
rmfileB <- "/Users/caterina/Documents/DataScience/SDS II/Project/rBOZZA.txt"
set.seed(111)
provaRB <- jags(data = djags,
                model.file = rmfileB,
                parameters.to.save = params,
                n.chains = 3, n.iter = 10000,
                n.burnin = 1000, n.thin = 5)
## Inference for Bugs model at "/Users/caterina/Documents/DataScience/SDS II/Project/rBOZZA.txt", fit using jags,
## 3 chains, each with 10000 iterations (first 1000 discarded), n.thin = 5
## n.sims = 5400 iterations saved
## mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
## mu[1] 0.324 0.022 0.282 0.310 0.323 0.337 0.368 1.001 5400
## mu[2] 0.324 0.022 0.282 0.310 0.323 0.337 0.368 1.001 5400
## mu[3] 0.536 0.077 0.386 0.487 0.535 0.583 0.694 1.001 5400
## mu[4] 0.324 0.022 0.282 0.310 0.323 0.337 0.368 1.001 5400
## mu[5] 0.368 0.025 0.321 0.352 0.368 0.383 0.418 1.001 5400
## mu[6] 0.288 0.025 0.240 0.273 0.288 0.304 0.337 1.001 5400
## mu[7] 0.346 0.023 0.302 0.331 0.346 0.360 0.393 1.001 5400
## mu[8] 0.377 0.022 0.333 0.363 0.377 0.392 0.421 1.001 5400
## mu[9] 0.324 0.022 0.282 0.310 0.323 0.337 0.368 1.001 5400
## mu[10] 0.360 0.026 0.307 0.343 0.359 0.377 0.410 1.001 5400
## mu[11] 0.328 0.021 0.286 0.315 0.328 0.341 0.368 1.001 5400
## mu[12] 0.272 0.018 0.238 0.260 0.272 0.284 0.309 1.001 5400
## mu[13] 0.293 0.017 0.259 0.282 0.292 0.304 0.327 1.001 5400
## mu[14] 0.281 0.024 0.234 0.266 0.281 0.297 0.329 1.001 5400
## mu[15] 0.313 0.021 0.273 0.299 0.312 0.326 0.355 1.001 5400
## mu[16] 0.379 0.023 0.334 0.365 0.379 0.395 0.424 1.001 5400
## mu[17] 0.268 0.018 0.234 0.257 0.268 0.280 0.305 1.001 5400
## mu[18] 0.314 0.026 0.263 0.297 0.314 0.331 0.366 1.001 5400
## mu[19] 0.293 0.017 0.259 0.282 0.292 0.304 0.327 1.001 5400
## sigma 0.083 0.015 0.059 0.072 0.081 0.091 0.117 1.001 5400
## deviance -41.277 2.757 -44.293 -43.222 -41.983 -40.059 -34.533 1.001 5400
##
## For each parameter, n.eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
##
## DIC info (using the rule, pD = var(deviance)/2)
## pD = 3.8 and DIC = -37.5
## DIC is an estimate of expected predictive error (lower deviance is better).
| Parameter | Estimate |
|---|---|
| deviance | -37.1273942 |
| mu | 0.3362833 |
| sigma | 0.0929800 |
| | deviance | mu | sigma |
|---|---|---|---|
| 2.5% | -39.21057 | 0.2924261 | 0.0667109 |
| 97.5% | -31.24460 | 0.3788140 | 0.1290379 |
| Parameter | Lower | Upper |
|---|---|---|
| deviance | -39.2621249 | -32.8014235 |
| mu | 0.2911746 | 0.3760255 |
| sigma | 0.0627953 | 0.1234966 |
| Parameter | MCSE |
|---|---|
| sigma | 0.0009487 |
| mu | 0.0010911 |
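Summaries like the ones above (posterior mean, equal-tailed 95% interval, MCSE) can be computed from the saved draws. The draws below are illustrative stand-ins for the `mu` chain, and the MCSE shown is the naive iid version; with autocorrelated chains one would divide by the effective sample size instead:

```r
# Posterior summaries from saved draws (illustrative stand-in chain)
set.seed(2)
draws <- rnorm(2700, mean = 0.336, sd = 0.025)  # stand-in for the mu chain
est  <- mean(draws)                              # point estimate
ci   <- quantile(draws, c(0.025, 0.975))         # equal-tailed 95% interval
mcse <- sd(draws) / sqrt(length(draws))          # naive MCSE (iid assumption)
```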
##
## Stationarity start p-value
## test iteration
## deviance passed 91 0.284
## mu passed 1 0.121
## sigma passed 181 0.276
##
## Halfwidth Mean Halfwidth
## test
## deviance passed -37.1624 0.14873
## mu passed 0.3359 0.00201
## sigma passed 0.0922 0.00107
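The stationarity/halfwidth tables above are Heidelberger–Welch diagnostics (as produced, e.g., by `coda::heidel.diag`). The halfwidth part of the test can be sketched in base R: the chain passes when the 95% halfwidth of the mean estimate is small relative to the mean (the default tolerance in `heidel.diag` is 0.1). The stationarity part uses a Cramér–von Mises test and is omitted here; the draws are an illustrative stand-in, and the naive MCSE replaces the spectral estimate the real diagnostic uses:

```r
# Sketch of the Heidelberger-Welch halfwidth criterion (illustrative draws)
set.seed(3)
draws <- rnorm(2700, mean = 0.336, sd = 0.025)   # stand-in for the mu chain
mcse      <- sd(draws) / sqrt(length(draws))      # naive MCSE
halfwidth <- 1.96 * mcse                          # 95% halfwidth of the mean
passed    <- halfwidth / abs(mean(draws)) < 0.1   # default tolerance eps = 0.1
```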
| Parameter | Estimate |
|---|---|
| deviance | -42.0278657 |
| mu[1] | 0.2958112 |
| mu[2] | 0.3396067 |
| mu[3] | 0.5366230 |
| mu[4] | 0.3308476 |
| mu[5] | 0.2973390 |
| mu[6] | 0.3131682 |
| mu[7] | 0.3133633 |
| mu[8] | 0.3609640 |
| mu[9] | 0.3177089 |
| mu[10] | 0.3830481 |
| mu[11] | 0.3500460 |
| mu[12] | 0.3176297 |
| mu[13] | 0.3075492 |
| mu[14] | 0.3073175 |
| mu[15] | 0.3082029 |
| mu[16] | 0.3384465 |
| mu[17] | 0.3183539 |
| mu[18] | 0.3095582 |
| mu[19] | 0.3250674 |
| sigma | 0.0816394 |
| | deviance | mu[1] | mu[2] | mu[3] | mu[4] | mu[5] | mu[6] | mu[7] | mu[8] | mu[9] | mu[10] | mu[11] | mu[12] | mu[13] | mu[14] | mu[15] | mu[16] | mu[17] | mu[18] | mu[19] | sigma |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.5% | -46.91012 | 0.2257369 | 0.2685986 | 0.3889525 | 0.2759738 | 0.1948379 | 0.2457324 | 0.2508640 | 0.2801546 | 0.2734672 | 0.3250413 | 0.2932919 | 0.2473465 | 0.2143995 | 0.2456262 | 0.2632824 | 0.2601138 | 0.2435837 | 0.2464823 | 0.2553899 | 0.0559972 |
| 97.5% | -32.74120 | 0.3668126 | 0.4116052 | 0.6870141 | 0.3856956 | 0.4017396 | 0.3818869 | 0.3737854 | 0.4445307 | 0.3604584 | 0.4386429 | 0.4071174 | 0.3853569 | 0.4009940 | 0.3688805 | 0.3546266 | 0.4172395 | 0.3903436 | 0.3725309 | 0.3950167 | 0.1223576 |
| Parameter | Lower | Upper |
|---|---|---|
| deviance | -47.4547842 | -34.6497240 |
| mu[1] | 0.2257453 | 0.3668198 |
| mu[2] | 0.2652035 | 0.4073765 |
| mu[3] | 0.3922607 | 0.6898625 |
| mu[4] | 0.2761408 | 0.3858701 |
| mu[5] | 0.1901484 | 0.3964135 |
| mu[6] | 0.2454026 | 0.3815348 |
| mu[7] | 0.2550755 | 0.3768064 |
| mu[8] | 0.2808934 | 0.4449314 |
| mu[9] | 0.2728321 | 0.3591409 |
| mu[10] | 0.3260452 | 0.4391970 |
| mu[11] | 0.2958232 | 0.4089543 |
| mu[12] | 0.2463924 | 0.3836431 |
| mu[13] | 0.2207491 | 0.4058921 |
| mu[14] | 0.2491961 | 0.3716476 |
| mu[15] | 0.2638937 | 0.3549653 |
| mu[16] | 0.2569591 | 0.4127968 |
| mu[17] | 0.2418388 | 0.3882575 |
| mu[18] | 0.2478540 | 0.3738539 |
| mu[19] | 0.2603219 | 0.3994407 |
| sigma | 0.0520978 | 0.1139996 |
| Parameter | MCSE |
|---|---|
| sigma | 0.0004235 |
| mu[1] | 0.0008287 |
| mu[2] | 0.0008880 |
| mu[3] | 0.0017228 |
| mu[4] | 0.0006521 |
| mu[5] | 0.0011984 |
| mu[6] | 0.0008229 |
| mu[7] | 0.0006822 |
| mu[8] | 0.0010220 |
| mu[9] | 0.0004790 |
| mu[10] | 0.0006566 |
| mu[11] | 0.0006866 |
| mu[12] | 0.0008303 |
| mu[13] | 0.0012190 |
| mu[14] | 0.0007263 |
| mu[15] | 0.0005263 |
| mu[16] | 0.0009358 |
| mu[17] | 0.0008839 |
| mu[18] | 0.0007419 |
| mu[19] | 0.0008708 |
##
## Stationarity start p-value
## test iteration
## deviance passed 1 0.6962
## mu[1] passed 1 0.6089
## mu[10] passed 1 0.6589
## mu[11] passed 1 0.7494
## mu[12] passed 1 0.9832
## mu[13] passed 1 0.1818
## mu[14] passed 1 0.2305
## mu[15] passed 1 0.8725
## mu[16] passed 1 0.6813
## mu[17] passed 1 0.9825
## mu[18] passed 1 0.1124
## mu[19] passed 1 0.5869
## mu[2] passed 1 0.0572
## mu[3] passed 1 0.7589
## mu[4] passed 1 0.0797
## mu[5] passed 1 0.9150
## mu[6] passed 1 0.1652
## mu[7] passed 1 0.5851
## mu[8] passed 181 0.0771
## mu[9] passed 1 0.2542
## sigma passed 1 0.2889
##
## Halfwidth Mean Halfwidth
## test
## deviance passed -42.0384 0.186919
## mu[1] passed 0.2956 0.001613
## mu[10] passed 0.3828 0.001305
## mu[11] passed 0.3499 0.001312
## mu[12] passed 0.3177 0.001577
## mu[13] passed 0.3075 0.002129
## mu[14] passed 0.3073 0.001446
## mu[15] passed 0.3081 0.001026
## mu[16] passed 0.3380 0.001791
## mu[17] passed 0.3184 0.001673
## mu[18] passed 0.3094 0.001535
## mu[19] passed 0.3250 0.001652
## mu[2] passed 0.3395 0.001639
## mu[3] passed 0.5358 0.003626
## mu[4] passed 0.3307 0.001287
## mu[5] passed 0.2969 0.002360
## mu[6] passed 0.3131 0.001594
## mu[7] passed 0.3131 0.001427
## mu[8] passed 0.3609 0.002001
## mu[9] passed 0.3175 0.000948
## sigma passed 0.0815 0.000836
| Parameter | Estimate |
|---|---|
| deviance | -41.2771981 |
| mu[1] | 0.3236415 |
| mu[2] | 0.3236415 |
| mu[3] | 0.5364660 |
| mu[4] | 0.3236415 |
| mu[5] | 0.3677744 |
| mu[6] | 0.2883836 |
| mu[7] | 0.3457079 |
| mu[8] | 0.3772881 |
| mu[9] | 0.3236415 |
| mu[10] | 0.3595380 |
| mu[11] | 0.3279578 |
| mu[12] | 0.2721531 |
| mu[13] | 0.2927000 |
| mu[14] | 0.2810281 |
| mu[15] | 0.3126082 |
| mu[16] | 0.3794463 |
| mu[17] | 0.2684753 |
| mu[18] | 0.3141278 |
| mu[19] | 0.2927000 |
| sigma | 0.0828396 |
| | deviance | mu[1] | mu[2] | mu[3] | mu[4] | mu[5] | mu[6] | mu[7] | mu[8] | mu[9] | mu[10] | mu[11] | mu[12] | mu[13] | mu[14] | mu[15] | mu[16] | mu[17] | mu[18] | mu[19] | sigma |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.5% | -44.29276 | 0.2824686 | 0.2824686 | 0.3860462 | 0.2824686 | 0.3209871 | 0.2400652 | 0.3017279 | 0.3334964 | 0.2824686 | 0.3067753 | 0.2861685 | 0.2375304 | 0.2588056 | 0.2335417 | 0.2728390 | 0.3335743 | 0.2343206 | 0.2625587 | 0.2588056 | 0.0593759 |
| 97.5% | -34.53272 | 0.3678029 | 0.3678029 | 0.6938051 | 0.3678029 | 0.4179578 | 0.3374968 | 0.3928803 | 0.4211845 | 0.3678029 | 0.4103350 | 0.3675300 | 0.3092888 | 0.3270207 | 0.3293190 | 0.3552641 | 0.4240970 | 0.3051092 | 0.3662891 | 0.3270207 | 0.1165395 |
| Parameter | Lower | Upper |
|---|---|---|
| deviance | -44.5104433 | -35.9176703 |
| mu[1] | 0.2806000 | 0.3654697 |
| mu[2] | 0.2806000 | 0.3654697 |
| mu[3] | 0.3833615 | 0.6889493 |
| mu[4] | 0.2806000 | 0.3654697 |
| mu[5] | 0.3188636 | 0.4153065 |
| mu[6] | 0.2365952 | 0.3335901 |
| mu[7] | 0.2997318 | 0.3903881 |
| mu[8] | 0.3344654 | 0.4219529 |
| mu[9] | 0.2806000 | 0.3654697 |
| mu[10] | 0.3090580 | 0.4119540 |
| mu[11] | 0.2887533 | 0.3699226 |
| mu[12] | 0.2359591 | 0.3073268 |
| mu[13] | 0.2609882 | 0.3288833 |
| mu[14] | 0.2300324 | 0.3251291 |
| mu[15] | 0.2710341 | 0.3530105 |
| mu[16] | 0.3362269 | 0.4264718 |
| mu[17] | 0.2327704 | 0.3031737 |
| mu[18] | 0.2589728 | 0.3621398 |
| mu[19] | 0.2609882 | 0.3288833 |
| sigma | 0.0579172 | 0.1137714 |
| Parameter | MCSE |
|---|---|
| sigma | 0.0003478 |
| mu[1] | 0.0004925 |
| mu[2] | 0.0004925 |
| mu[3] | 0.0016990 |
| mu[4] | 0.0004925 |
| mu[5] | 0.0005597 |
| mu[6] | 0.0005549 |
| mu[7] | 0.0005261 |
| mu[8] | 0.0005136 |
| mu[9] | 0.0004925 |
| mu[10] | 0.0005806 |
| mu[11] | 0.0004667 |
| mu[12] | 0.0004142 |
| mu[13] | 0.0003944 |
| mu[14] | 0.0005442 |
| mu[15] | 0.0004757 |
| mu[16] | 0.0005236 |
| mu[17] | 0.0004086 |
| mu[18] | 0.0005923 |
| mu[19] | 0.0003944 |
##
## Stationarity start p-value
## test iteration
## deviance passed 1 0.1680
## mu[1] passed 1 0.1577
## mu[10] passed 1 0.9409
## mu[11] passed 1 0.7191
## mu[12] passed 1 0.1577
## mu[13] passed 1 0.3306
## mu[14] passed 1 0.0597
## mu[15] passed 1 0.1577
## mu[16] passed 1 0.5823
## mu[17] passed 1 0.1577
## mu[18] passed 1 0.0604
## mu[19] passed 1 0.3306
## mu[2] passed 1 0.1577
## mu[3] passed 1 0.6356
## mu[4] passed 1 0.1577
## mu[5] passed 1 0.1577
## mu[6] passed 1 0.0599
## mu[7] passed 1 0.1577
## mu[8] passed 1 0.2654
## mu[9] passed 1 0.1577
## sigma passed 1 0.6905
##
## Halfwidth Mean Halfwidth
## test
## deviance passed -41.2193 0.129479
## mu[1] passed 0.3240 0.001011
## mu[10] passed 0.3595 0.001211
## mu[11] passed 0.3280 0.000965
## mu[12] passed 0.2724 0.000851
## mu[13] passed 0.2929 0.000808
## mu[14] passed 0.2814 0.001128
## mu[15] passed 0.3129 0.000977
## mu[16] passed 0.3796 0.001079
## mu[17] passed 0.2687 0.000839
## mu[18] passed 0.3145 0.001226
## mu[19] passed 0.2929 0.000808
## mu[2] passed 0.3240 0.001011
## mu[3] passed 0.5358 0.003602
## mu[4] passed 0.3240 0.001011
## mu[5] passed 0.3681 0.001149
## mu[6] passed 0.2888 0.001150
## mu[7] passed 0.3460 0.001080
## mu[8] passed 0.3775 0.001052
## mu[9] passed 0.3240 0.001011
## sigma passed 0.0829 0.000725
set.seed(123)
obs <- 50
# Simulate the covariates: liver weight, body weight and dose
lw <- round(runif(obs, 5, 10), 1)
w <- round(rnorm(obs, 174, 6))
d <- round(runif(obs, 0.73, 1), 2)
# Pick fixed values for the parameters of the model
beta.dose <- 0.001
beta.w <- 0.002
sigma <- 0.1
# Simulate the response according to the model
dil <- c(); mu <- c()
for (i in 1:obs) {
  m <- beta.dose*d[i] + beta.w*w[i]   # linear predictor for observation i
  mu <- append(mu, m)
  inLiv <- round(rnorm(1, m, sigma), 2)
  dil <- append(dil, abs(inLiv))      # abs() keeps the simulated amounts non-negative
}
dat <- data.frame(Bodyw = w, Liverw = lw, Dose = d, InLiver = dil)
rownames(dat) <- NULL
| Bodyw | Liverw | Dose | InLiver |
|---|---|---|---|
| 164 | 6.4 | 0.96 | 0.26 |
| 179 | 8.9 | 0.86 | 0.38 |
| 175 | 7.0 | 0.83 | 0.33 |
| 167 | 9.4 | 0.80 | 0.30 |
| 182 | 9.7 | 0.76 | 0.27 |
| 177 | 5.2 | 0.84 | 0.35 |
| 172 | 7.6 | 0.88 | 0.27 |
| 179 | 9.5 | 0.79 | 0.19 |
| 179 | 7.8 | 0.85 | 0.32 |
| 179 | 7.3 | 0.79 | 0.45 |
| 178 | 9.8 | 0.87 | 0.30 |
| 177 | 7.3 | 0.83 | 0.42 |
| 174 | 8.4 | 0.91 | 0.19 |
| 172 | 7.9 | 0.83 | 0.34 |
| 172 | 5.5 | 0.83 | 0.40 |
| 170 | 9.5 | 0.87 | 0.37 |
| 173 | 6.2 | 0.93 | 0.36 |
| 166 | 5.2 | 0.79 | 0.27 |
| 187 | 6.6 | 0.84 | 0.29 |
| 181 | 9.8 | 0.80 | 0.26 |
| 167 | 9.4 | 0.90 | 0.35 |
| 172 | 8.5 | 0.78 | 0.25 |
| 171 | 8.2 | 0.96 | 0.29 |
| 179 | 10.0 | 0.93 | 0.33 |
| 173 | 8.3 | 0.91 | 0.53 |
| 176 | 8.5 | 0.90 | 0.29 |
| 174 | 7.7 | 0.83 | 0.37 |
| 174 | 8.0 | 0.87 | 0.36 |
| 182 | 6.4 | 0.97 | 0.27 |
| 173 | 5.7 | 0.89 | 0.34 |
| 183 | 9.8 | 0.96 | 0.51 |
| 165 | 9.5 | 0.81 | 0.38 |
| 178 | 8.5 | 0.92 | 0.36 |
| 175 | 9.0 | 0.80 | 0.31 |
| 175 | 5.1 | 0.89 | 0.15 |
| 176 | 7.4 | 0.86 | 0.47 |
| 171 | 8.8 | 0.80 | 0.20 |
| 172 | 6.1 | 0.88 | 0.42 |
| 168 | 6.6 | 0.98 | 0.53 |
| 168 | 6.2 | 0.97 | 0.19 |
| 176 | 5.7 | 0.80 | 0.42 |
| 177 | 7.1 | 0.82 | 0.33 |
| 174 | 7.1 | 1.00 | 0.19 |
| 180 | 6.8 | 0.90 | 0.21 |
| 186 | 5.8 | 0.98 | 0.21 |
| 171 | 5.7 | 0.86 | 0.29 |
| 160 | 6.2 | 0.84 | 0.17 |
| 180 | 7.3 | 0.91 | 0.43 |
| 170 | 6.3 | 0.77 | 0.55 |
| 170 | 9.3 | 0.88 | 0.21 |
sim_jags <- list("N" = obs, "dil" = dat[, "InLiver"], "dose" = dat[, "Dose"],
                 "w" = dat[, "Bodyw"])
paramSIM <- c("sigma", "mu")
rmfileB <- "/Users/caterina/Documents/DataScience/SDS II/Project/rBOZZA.txt"
set.seed(123)
provaSIM <- jags(data = sim_jags,
                 model.file = rmfileB,
                 parameters.to.save = paramSIM,
                 n.chains = 3, n.iter = 10000,
                 n.burnin = 1000, n.thin = 5)
| | Sigma | Mu |
|---|---|---|
| Real | 0.1000000 | 0.3496286 |
| Estimated | 0.1015783 | 0.3244440 |
The main objective of this project was to carry out a Bayesian analysis of these data. Starting from a basic model, two alternatives were created, and the three models were compared using the DIC (Deviance Information Criterion), which shows an improvement from model to model.
These differences and improvements can also be seen with other diagnostics. For example, the MCSEs of model 3 are smaller than those of model 2, which in turn are smaller than those of model 1, and the credible intervals generally become narrower as well.
We can also see that not all the parameters of model 2 reached an effective sample size of 5400, unlike those of model 3, indicating that its autocorrelation is not as low. On the other hand, all three models passed the stationarity and halfwidth tests.
Finally, by fitting the model to data generated from the model itself, we were able to check its coherence, highlighting its ability to recover reliable estimates.
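The model comparison can be made explicit by collecting the DIC values reported above (smaller is better):

```r
# DIC values reported for the three models in the output above
dic <- c(model1 = -33.6, model2 = -34.5, model3 = -37.5)
names(which.min(dic))  # "model3"
```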